

On Learning Markov Chains

Hao, Yi, Orlitsky, Alon, Pichapati, Venkatadheeraj

Neural Information Processing Systems

The problem of estimating an unknown discrete distribution from its samples is a fundamental tenet of statistical learning. Over the past decade, it has attracted significant research effort and has been solved for a variety of divergence measures. Surprisingly, an equally important problem, estimating an unknown Markov chain from its samples, is still far from understood. We consider two problems related to the min-max risk (expected loss) of estimating an unknown k-state Markov chain from its n sequential samples: predicting the conditional distribution of the next sample with respect to the KL-divergence, and estimating the transition matrix with respect to a natural loss induced by KL or a more general f-divergence measure. For the first measure, we determine the min-max prediction risk to within a linear factor in the alphabet size, showing it is \Omega(k\log\log n/n) and O(k^2\log\log n/n). For the second, if the transition probabilities can be arbitrarily small, then only trivial uniform risk upper bounds can be derived. We therefore consider transition probabilities that are bounded away from zero, and resolve the problem for essentially all sufficiently smooth f-divergences, including KL-, L_2-, Chi-squared, Hellinger, and Alpha-divergences.
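To make the estimation setup concrete, the following is a minimal Python sketch of a baseline approach: estimate the k-state transition matrix from a single length-n sample path with add-one (Laplace) smoothing, then measure the per-row KL loss. The smoothed estimator, the Dirichlet-generated chain, and the function names here are illustrative assumptions, not the paper's minimax-optimal construction.

```python
import numpy as np

def estimate_transition_matrix(path, k, alpha=1.0):
    """Add-alpha (Laplace) smoothed estimate of a k-state transition
    matrix from a single sample path. A simple baseline sketch, not the
    paper's minimax-optimal estimator."""
    counts = np.zeros((k, k))
    for s, t in zip(path[:-1], path[1:]):
        counts[s, t] += 1
    return (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)

def row_kl(p, q):
    """KL divergence between two distributions over [k]."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Example: simulate a 3-state chain, estimate it, report per-row KL loss.
rng = np.random.default_rng(0)
k, n = 3, 10_000
M = rng.dirichlet(np.ones(k), size=k)     # true transition matrix
path = [0]
for _ in range(n - 1):
    path.append(int(rng.choice(k, p=M[path[-1]])))
M_hat = estimate_transition_matrix(path, k)
print("per-row KL:", [row_kl(M[i], M_hat[i]) for i in range(k)])
```

Note that the per-row losses can blow up when some transition probabilities are tiny and the corresponding transitions are rarely observed, which matches the abstract's observation that only trivial uniform bounds exist unless the transition probabilities are bounded away from zero.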


Reviews: On Learning Markov Chains

Neural Information Processing Systems

Summary: The paper's goal is to study the minimax rates for learning problems on Markovian data. The author(s) consider an interesting setting in which the observed data follow a Markovian dependency pattern: discrete-state Markov chains with state space [k]. They study the minimax error rates for two tasks. Prediction: Given a trajectory X_1, X_2, …, X_n from an unknown chain M, predict the probability distribution of the next state X_{n+1}, i.e., minimize E[ KL( P(· | X_1, …, X_n) || \hat{P}(· | X_1, …, X_n) ) ], where the expectation is over the trajectory X_1, …, X_n. The paper focuses on the KL-divergence as the loss function and presents a conjecture for how the L_1 loss should scale with respect to k and n.
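As a rough illustration of the prediction risk described above, here is a Monte Carlo sketch (in Python) that estimates E[ KL( P(· | X_1, …, X_n) || \hat{P}(· | X_1, …, X_n) ) ] for an add-one smoothed empirical predictor. The predictor, the chain, and the parameter choices are assumptions for illustration; the paper's near-optimal predictor attaining the k\log\log n/n-type rates quoted in the abstract is more delicate.

```python
import numpy as np

def kl_prediction_risk(M, n, trials=200, alpha=1.0, seed=0):
    """Monte Carlo estimate of E[ KL( P(.|X_1..n) || P_hat(.|X_1..n) ) ].
    Since the chain is Markov, the true conditional is M[X_n]. P_hat is an
    add-alpha smoothed empirical predictor -- a simple baseline, not the
    paper's near-optimal predictor."""
    rng = np.random.default_rng(seed)
    k = M.shape[0]
    total = 0.0
    for _ in range(trials):
        counts = np.zeros((k, k))
        x = int(rng.integers(k))          # arbitrary start state
        for _ in range(n - 1):
            y = int(rng.choice(k, p=M[x]))
            counts[x, y] += 1
            x = y
        p_hat = (counts[x] + alpha) / (counts[x].sum() + alpha * k)
        p = M[x]                          # true next-state distribution
        mask = p > 0
        total += float(np.sum(p[mask] * np.log(p[mask] / p_hat[mask])))
    return total / trials

# Usage: the estimated risk shrinks as n grows; try n = 1000 vs n = 10000.
M = np.random.default_rng(1).dirichlet(np.ones(3), size=3)
print(kl_prediction_risk(M, n=1000))
```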

